Reviews: Differentiable MPC for End-to-end Planning and Control
The paper presents an approach, from an optimization perspective, to integrating the controller and the model. I am glad to see that this work has abundant citations from both the classical control theory community and the reinforcement learning community. The idea behind the formulation is nice; however, I have concerns about clarity and significance. First, the formulation of the problem does not take into account any randomness (e.g., from the environment) and does not even mention the Markov decision process. In equation 2, there is no expectation operator.
Differentiable MPC for End-to-end Planning and Control
Amos, Brandon, Jimenez, Ivan, Sacks, Jacob, Boots, Byron, Kolter, J. Zico
We present foundations for using Model Predictive Control (MPC) as a differentiable policy class for reinforcement learning in continuous state and action spaces. This provides one way of leveraging and combining the advantages of model-free and model-based approaches. Specifically, we differentiate through MPC by using the KKT conditions of the convex approximation at a fixed point of the controller. Using this strategy, we are able to learn the cost and dynamics of a controller via end-to-end learning. Our experiments focus on imitation learning in the pendulum and cartpole domains, where we learn the cost and dynamics terms of an MPC policy class. We show that our MPC policies are significantly more data-efficient than a generic neural network and that our method is superior to traditional system identification in a setting where the expert is unrealizable.
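The core mechanism in the abstract, differentiating through the KKT conditions of a convex problem at its solution, can be illustrated on a small equality-constrained QP. This is a hedged sketch, not the authors' implementation (which handles the full box-constrained LQR subproblem of MPC): at the optimum the KKT conditions form a linear system, so the Jacobian of the solution with respect to a cost parameter follows from the implicit function theorem by re-solving against the same KKT matrix. All function names here are illustrative.

```python
import numpy as np

def solve_qp(Q, p, A, b):
    """min_z 0.5 z^T Q z + p^T z  s.t.  A z = b, via the KKT linear system
    [Q A^T; A 0] [z; lam] = [-p; b]."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-p, b]))
    return sol[:n], sol[n:]  # primal z*, dual lam*

def grad_z_wrt_p(Q, A):
    """d z*/d p: differentiating the KKT conditions gives
    K [dz; dlam] = [-dp; 0], so one solve per parameter direction."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.vstack([-np.eye(n), np.zeros((m, n))])
    return np.linalg.solve(K, rhs)[:n]  # (n x n) Jacobian of z* in p

# Toy instance: quadratic cost with a linear coupling constraint.
Q = np.diag([2.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
p = np.array([0.5, -0.3])

z, lam = solve_qp(Q, p, A, b)
J = grad_z_wrt_p(Q, A)

# Sanity check the analytic Jacobian against finite differences.
eps = 1e-6
fd = np.zeros((2, 2))
for i in range(2):
    dp = np.zeros(2)
    dp[i] = eps
    z_eps, _ = solve_qp(Q, p + dp, A, b)
    fd[:, i] = (z_eps - z) / eps
assert np.allclose(J, fd, atol=1e-4)
```

Because the gradient comes from one extra linear solve at the fixed point rather than from unrolling solver iterations, the learned cost and dynamics parameters can be trained end-to-end at essentially the cost of the forward pass; this is what lets the MPC layer sit inside a standard learning pipeline.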